floating point underflow - definition. What is floating point underflow


What is floating point underflow - definition


floating point underflow
<programming> The condition that arises when the result of a floating-point operation is smaller in magnitude than the smallest normalised number the format can represent. Depending on the system, the result is either flushed to zero or kept as a denormalised (subnormal) number with reduced precision (gradual underflow).
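A minimal Python sketch (an illustration added here, not part of the dictionary entry) showing this behaviour on IEEE 754 double-precision floats, as used by CPython: halving the smallest normal number yields a subnormal, non-zero result, while halving the smallest subnormal underflows all the way to zero.

```python
import sys

# Smallest positive normalised double (about 2.2250738585072014e-308)
smallest_normal = sys.float_info.min
# Smallest positive subnormal double, 2**-1074 (the last step before zero)
smallest_subnormal = 5e-324

print(smallest_normal / 2)      # subnormal: still non-zero, but with reduced precision
print(smallest_subnormal / 2)   # 0.0 -- the result has underflowed to zero
```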
floating-point         
  • [Image captions from the Wikipedia article: single-precision floating-point numbers on a number line, with green lines marking representable values; the Z3 computer, which used a 22-bit binary floating-point representation; Leonardo Torres y Quevedo, who proposed a form of floating point in 1914.]
COMPUTER FORMAT FOR REPRESENTING RATIONAL NUMBERS
<programming, mathematics> A number representation consisting of a mantissa, M, an exponent, E, and a radix (or "base"), R. The number represented is M*R^E. In science and engineering, exponential or scientific notation uses a radix of ten, so, for example, the number 93,000,000 might be written 9.3 x 10^7 (where ^7 denotes a superscript 7).

In computer hardware, floating-point numbers are usually represented with a radix of two, since the mantissa and exponent are stored in binary, though many other representations could be used. The IEEE specifies a standard representation (IEEE 754) which is used by many hardware floating-point systems. Non-zero numbers are normalised so that the binary point is immediately before the most significant bit of the mantissa; since the number is non-zero, this bit must be a one, so it need not be stored. A fixed "bias" is added to the exponent so that positive and negative exponents can be represented without a sign bit. Finally, extreme values of the exponent (all zeros and all ones) are used to represent special numbers like zero and positive and negative infinity.

In programming languages with explicit typing, floating-point types are introduced with the keyword "float", or sometimes "double" for a higher-precision type. See also floating-point accelerator, floating-point unit. Opposite: fixed-point. (2008-06-13)
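As an illustrative sketch (not part of the FOLDOC entry above), the following Python snippet unpacks an IEEE 754 single-precision value into the sign, biased exponent, and stored mantissa fields just described. The function name decode_float32 is chosen here for illustration, and the reconstruction assumes a normalised number.

```python
import struct

def decode_float32(x: float):
    """Split an IEEE 754 single-precision value into sign, exponent and mantissa."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                      # 1 sign bit
    biased_exp = (bits >> 23) & 0xFF       # 8 exponent bits, bias = 127
    mantissa = bits & 0x7FFFFF             # 23 stored bits; the leading 1 is implicit
    # Reconstruct a normalised number: (-1)^sign * 1.mantissa * 2^(exp - 127)
    value = (-1) ** sign * (1 + mantissa / 2**23) * 2 ** (biased_exp - 127)
    return sign, biased_exp, mantissa, value

# 93,000,000 -- the example used in the entry above -- is exactly representable
print(decode_float32(9.3e7))
```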
floating-point         
¦ noun [as modifier] Computing denoting a mode of representing numbers as two sequences of bits, one representing the digits in the number and the other an exponent which determines the position of the radix point.

Wikipedia

IBM hexadecimal floating-point

Hexadecimal floating point (now called HFP by IBM) is a format for encoding floating-point numbers first introduced on the IBM System/360 computers, and supported on subsequent machines based on that architecture, as well as machines which were intended to be application-compatible with System/360.

In comparison to IEEE 754 floating point, the HFP format has a longer significand and a shorter exponent. All HFP formats have 7 bits of exponent with a bias of 64. The normalized range of representable numbers is from 16^−65 to 16^63 (approx. 5.39761 × 10^−79 to 7.237005 × 10^75).

The number is represented by the following formula: (−1)^sign × 0.significand × 16^(exponent−64).
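As a rough illustration (added here, not taken from the article), the following Python sketch decodes a 32-bit HFP "short" value using the layout described above: one sign bit, a 7-bit exponent with bias 64, and a 24-bit fraction read as a base-16 significand. The function name decode_hfp32 is chosen for this example.

```python
def decode_hfp32(bits: int) -> float:
    """Decode a 32-bit IBM hexadecimal floating-point (HFP short) value.

    Layout: 1 sign bit, 7 exponent bits (bias 64), 24 fraction bits
    interpreted as the base-16 fraction 0.significand.
    """
    sign = (bits >> 31) & 0x1
    exponent = (bits >> 24) & 0x7F       # 7-bit exponent, bias 64
    fraction = bits & 0xFFFFFF           # 24 bits = 6 hexadecimal digits
    return (-1) ** sign * (fraction / 16**6) * 16 ** (exponent - 64)

# 0x41100000 encodes +1.0: exponent field 0x41 = 65, fraction 0x100000 = 1/16,
# so the value is (1/16) * 16**(65 - 64) = 1.0
print(decode_hfp32(0x41100000))
```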